Introduction

This report describes the results of the first priming mindfulness study. It was initially made using Dominique Makowski’s Supplementary Materials template. Two similar templates will be created based on the two online pilot studies from the Varela project.

This is an exploratory (not confirmatory) study. The result of this exploration will be used to conduct a second, confirmatory, preregistered study.


Note also that the data were cleaned beforehand. Five datasets (3 Qualtrics surveys and 2 Inquisit tasks) were merged through inner joins, so as to keep only participants who participated in each step of the study. Missing data will be imputed later on. Duplicates were addressed with the rempsyc::best_duplicate function, which keeps the duplicate with the fewest missing values and, in case of ties, the first occurrence.
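The merging step can be sketched as follows; the data frames and ID column below are made-up stand-ins for the actual files:

```r
library(dplyr)

# Toy stand-ins for the merged datasets (the real study used 3 Qualtrics
# surveys and 2 Inquisit tasks; names and columns here are hypothetical)
survey1 <- data.frame(workerId = c("a", "b", "c"), consent = 1)
survey2 <- data.frame(workerId = c("a", "b", "d"), KIMS_1 = 4:6)
task1 <- data.frame(workerId = c("a", "b"), blast = 7:8)

# Successive inner joins keep only participants present in every dataset
merged <- Reduce(function(x, y) inner_join(x, y, by = "workerId"),
                 list(survey1, survey2, task1))

merged$workerId  # only "a" and "b" completed every step
```

With the real data, rempsyc::best_duplicate would then be applied to the merged result before this point to resolve duplicate IDs.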

Packages & Data

Packages

library(rempsyc)
library(dplyr)
library(interactions)
library(performance)
library(see)
library(ggplot2)
library(report)
library(bestNormalize)
library(psych)
library(visdat)
library(missForest)
library(doParallel)

summary(report(sessionInfo()))

The analysis was done using the R Statistical language (v4.2.1; R Core Team, 2022) on Windows 10 x64, using the packages iterators (v1.0.14), doParallel (v1.0.17), interactions (v1.1.5), performance (v0.10.0.10), see (v0.7.3), report (v0.5.5.1), foreach (v1.5.2), bestNormalize (v1.8.3), psych (v2.2.9), missForest (v1.5), rempsyc (v0.1.0.3), visdat (v0.5.3), ggplot2 (v3.3.6) and dplyr (v1.0.10).

Data

# Read data
data <- read.table("data/fulldataset.txt", sep = "\t", header = TRUE)

# Dummy-code group variable
data <- data %>% 
  mutate(condition_dum = ifelse(condition == "Mindfulness", 1, 0),
         condition = as.factor(condition))

cat(report_participants(data, threshold = 1))

284 participants (Gender: 53.5% women, 30.6% men, 0.00% non-binary, 15.85% missing; Country: 94.01% USA, 1.41% Canada, 1.41% India, 1.41% missing, 1.76% other)

# Allocation ratio
report(data$condition)

x: 2 levels, namely Control (n = 143, 50.35%) and Mindfulness (n = 141, 49.65%)

Taylor Task…

At this stage, we define a list of our relevant variables.

Preparation

# Make list of DVs
col.list <- c("blastintensity", "blastduration", "blastintensity.duration",
              "blastintensity.first", "blastduration.first", 
              "blastintensity.duration.first", "all50trials",
              "taylor_hugues", "taylor_lindsay", "taylor_carnagey", "KIMS", 
              "BSCS", "BAQ", "SOPT", "IAT")

Blast Intensity * Duration

Why combine the intensity and duration scores? Should we? For a discussion, see:

Elson, M., Mohseni, M. R., Breuer, J., Scharkow, M., & Quandt, T. (2014). Press CRTT to measure aggressive behavior: the unstandardized use of the competitive reaction time task in aggression research. Psychological assessment, 26(2), 419. https://doi.org/10.1037/a0035569

  • Bushman and Baumeister (1998) used the sum of volume and duration settings in the first of 25 trials [p. 3]
  • Lindsay and Anderson (2000) multiplied volume with log-transformed duration settings. The average over 25 trials of those products was their measure for overall aggression.
  • Carnagey and Anderson (2005) averaged the products of volume and the square root of duration to form a single “aggressive energy score” (p. 887). No reason is given for this other than the claim that this single score supposedly is a valid measure and that duration should be square rooted.
  • Bartholow, Sestir, and Davis (2005) multiplied the average volume and duration settings to form a composite aggressive behavior score. Although Bartholow, Bushman, and Sestir (2006) also used volume and duration settings, they standardized and summed the two parameters instead of multiplying them.
  • Sometimes the option of setting the volume and/or duration to zero as a way to act nonaggressively is provided. Including settings of zero as an option also raises further questions, for example, how to handle trials in which participants set only one of the two intensity parameters to zero. [Note: we do have zero as option]
  • With regard to the analysis, there is no definitive answer to the question of how to calculate aggression scores, or whether different scores might measure different types of aggression, as long as none of them have been properly validated. As it seems that volume and duration do not measure the exact same construct, it is advisable to consider them as separate measures for related subdimensions of aggression.
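To make these quantification strategies concrete, here is a minimal sketch with made-up settings for a single participant (trial counts and values are illustrative only, not from our data):

```r
# Made-up settings for one participant over 5 trials (illustration only)
volume   <- c(5, 7, 1, 4, 6)  # 0-10 scale
duration <- c(2, 3, 1, 2, 4)  # in seconds

# Bushman & Baumeister (1998): sum of the first trial's settings
sum_first <- volume[1] + duration[1]

# Lindsay & Anderson (2000): average of volume x log(duration) products
log_mean <- mean(volume * log(duration))

# Carnagey & Anderson (2005): average of volume x sqrt(duration) products
sqrt_mean <- mean(volume * sqrt(duration))

# Bartholow, Sestir, & Davis (2005): product of the two overall averages
avg_product <- mean(volume) * mean(duration)

c(sum_first, avg_product)  # 7 and 11.04
```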

First sound blast

Why use the first sound blast only instead of the average of all trials? Should we?

According to some, the Taylor Aggression Paradigm is not a measure of aggression per se, but of reactive aggression, because participants react to the other “participant’s” aggression. They suggest that for a pure measure of aggression, it is recommended to use only the first sound blast set by the participant, before they receive one themselves. At this stage, we attempt the analyses with all these different measures of aggression for exploratory purposes. See the earlier reference to Elson et al. (2014):

  • If researchers are interested in measuring unprovoked aggression, they should also look at the settings in the first trial. Those studying provoked aggression or retaliation, on the other hand, should focus on all trials except the first one.

Problems in measurement

We note that Lobbestael et al. (2021) suggest, based on factor analysis, separately averaging all preprovocation versus all postprovocation trials. However, this recommendation applies to the 30-trial version (we have the 25-trial version). They add:

  • In contrast to the 30-trial version, the 25-trial CRTT version showed to be underlined by one single factor, clustering the duration and volume of both provoked and unprovoked trials, irrespective of whether these were preceded by win or lose experiences.

Lobbestael, J., Emmerling, F., Brugman, S., Broers, N., Sack, A. T., Schuhmann, T., … & Arntz, A. (2021). Toward a more valid assessment of behavioral aggression: An open source platform and an empirically derived scoring method for using the Competitive Reaction Time Task (CRTT). Assessment, 28(4), 1065-1079. https://doi.org/10.1177/1073191120959757

Therefore, we assume it is safe to use and combine all 25 trials. However, we also note (again) the very high heterogeneity in quantification strategies:

  • Why the Flexibility is Problematic There are currently 130 publications in which results are based on the CRTT, and they reported 157 different quantification strategies in total! Given the number of different versions of the CRTT measure that can be extracted from its use in a study, it is very easy for a researcher to analyze several (or several dozen) versions of the CRTT outcome measures in a study, running hypothesis tests with one version of the measure after another until a version is found that produces the desired pattern of results. Given that the measure has been used in several dozen different ways in the published research, and often in multiple ways by the same authors (and sometimes even in multiple ways for different analyses within the same paper), it appears likely that selective reporting of results after exploring analyses with multiple versions of the CRTT is not uncommon. Even if multiple quantifications used are reported in a paper, it is often not clear how to interpret the findings given that in many cases one quantification strategy does not have greater validity than others.

See: https://www.flexiblemeasures.com/crtt/

Thus the recommendations are:

  1. Preregistration of Analysis Plan
  2. Preregistration of Decision Tree (e.g., regarding data transformation)
  3. Defense of Selected Quantification Strategy

See: https://www.flexiblemeasures.com/crtt/index.php?menu=recommendations

Average of all 50 trials

After discussion with Dr. David Chester, it was also suggested to use the average of all 50 standardized trials, as in Chester and Lasko (2019) and Lasko and Chester (2022).

Chester & Lasko (2019). Validating a standardized approach to the Taylor Aggression Paradigm. Social Psychological and Personality Science, 10(5), 620-631. https://doi.org/10.1177/1948550618775408

Lasko & Chester (2022). Measurement invariance and item response theory analysis of the taylor aggression paradigm. Assessment, 29(5), 981-992. https://doi.org/10.1177/1073191121996450

Still, he also notes,

In one of our papers, we show that the major approaches to scoring the CRTT typically arrive at the same result, so in the end, the scoring strategy you choose is unlikely to have a large effect on your findings.

That said, technically speaking, if a participant sets the intensity to 10 but the duration to 0, there is no actual sound blast for that trial. Similarly, if a participant sets the duration to 10 but the intensity to 0, there is no actual sound blast for that trial. Taking the product of intensity and duration takes this dynamic into account. In contrast, other methods, such as the sum of intensity and duration, or the average of all 50 trials (intensity and duration), do not. That said, perhaps it is not a big deal, given that this particular scenario (a setting of 0 for one of the two parameters) is probably rare.
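A trivial numeric illustration of this point:

```r
# One trial where intensity is maximal but duration is zero: no actual blast
intensity <- 10
duration  <- 0

intensity * duration        # product: 0 -- correctly registers "no blast"
(intensity + duration) / 2  # sum-based composite: 5 -- mid-level score despite no blast
```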

Edit: Following another discussion, Dr. Chester pointed out that it is possible to measure how often this scenario of mismatching zero intensity occurs. Let’s test this right here.

# Blastintensity == 0
data %>% 
  filter(blastintensity == 0) %>% 
  summarize(percent = round(sum(blastduration != 0) / nrow(data) * 100, 2))
percent
0.35
# Blastduration == 0
data %>% 
  filter(blastduration == 0) %>% 
  summarize(percent = round(sum(blastintensity != 0) / nrow(data) * 100, 2))
percent
0

So we have 0.35% and 0% of the data in this scenario, respectively.

Dr. Chester also recommends a better (but more complex) approach:

If it’s possible, a superior approach to this aggregate scoring strategy is to employ multilevel modeling instead of a univariate analysis. When you aggregate across all 50 of the individual CRTT measures, you are losing a lot of information/variability/statistical-power. Multilevel modeling on the non-aggregated data allows you to retain this variability.

However, it is not yet clear whether this technique can be applied to our particular situation; further study will be required.
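For reference, a minimal sketch of the data structure such a multilevel model would require, using hypothetical column names (the model call itself is only indicated, not fit):

```r
library(dplyr)
library(tidyr)

# Toy wide data: 3 participants x 4 trials (column names are hypothetical)
wide <- data.frame(
  id = 1:3,
  condition = c("Control", "Mindfulness", "Control"),
  mixed_1 = c(10, 5, 8), mixed_2 = c(12, 4, 9),
  mixed_3 = c(11, 6, 7), mixed_4 = c(13, 5, 8)
)

# One row per trial: the non-aggregated structure a multilevel model needs
long <- wide %>%
  pivot_longer(starts_with("mixed_"), names_to = "trial",
               names_prefix = "mixed_", values_to = "blast")

# On the real data, a random-intercept model could then be fit, e.g.:
# lme4::lmer(blast ~ condition + (1 | id), data = long)
nrow(long)  # 12 rows: 3 participants x 4 trials
```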

Hugues Leduc Method

Hugues Leduc suggested instead using a two-step approach. First, calculate the product of volume and duration for each trial. Second, calculate the average of these 25 per-trial volume*duration composites. This should result in a score different from the one obtained by simply multiplying the average of all duration trials by the average of all volume trials. We now add this method to the method comparison.

Talking with Stéphane, we also agreed to test the following two related methods:

The average of 25 composite trial scores, themselves made from the volume multiplied by the:

  • log-transformed duration (Lindsay and Anderson (2000) method)

  • square root duration (Carnagey and Anderson (2005) method)
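A toy example (made-up numbers) showing why averaging per-trial composites can differ from combining overall averages:

```r
# Two settings profiles chosen so the two strategies diverge clearly
volume   <- c(2, 8, 5)
duration <- c(8, 2, 5)

mean(volume * duration)        # two-step: average of per-trial products = 19
mean(volume) * mean(duration)  # product of the two overall averages = 25
```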

Data cleaning

In this section, we are preparing the data for analysis: (a) taking care of preliminary exclusions, (b) checking for and exploring missing values, (c) imputing missing data with missForest, (d) computing scale means, and (e) extracting reliability indices for our scales.

Preliminary exclusions

First, we only want to keep those who agreed to keep their participation in the study after the debriefing.

data %>% 
  filter(debrief.consent != 1 | is.na(debrief.consent)) %>% 
    nrow
## [1] 1

There is 1 person who would be excluded.

data <- data %>%
    filter(debrief.consent != 2)
cat(report_participants(data, threshold = 1))

283 participants (Gender: 53.4% women, 30.7% men, 0.00% non-binary, 15.90% missing; Country: 93.99% USA, 1.41% Canada, 1.41% India, 1.41% missing, 1.77% other)

Second, we only want to keep participants who had at least an 80% success rate on the critical experimental manipulation task. Let’s see how many participants fall below that threshold. Those with missing values on the manipsuccessleft variable will also be excluded, since they did not complete the critical experimental manipulation in this study.

data %>% 
    summarize(success.80 = sum(manipsuccessleft < .80, na.rm = TRUE),
              is.na = sum(is.na(manipsuccessleft)))
success.80 is.na
18 0

There are 18 people with a success rate below 80%; let’s exclude them.

data <- data %>% 
    filter(manipsuccessleft >= .80)
cat(report_participants(data, threshold = 1))

265 participants (Gender: 55.8% women, 29.4% men, 0.00% non-binary, 14.72% missing; Country: 95.09% USA, 1.51% missing, 1.13% Canada, 2.26% other)

Explore missing data

Missing items

# Check for nice_na
nice_na(data, scales = c("BSCS", "BAQ", "KIMS"))
var items na cells na_percent na_max na_max_percent all_na
BSCS_1:BSCS_7 7 0 1855 0.00 0 0.00 0
BAQ_1:BAQ_12 12 0 3180 0.00 0 0.00 0
KIMS_1:KIMS_39 39 0 10335 0.00 0 0.00 0
Total 180 7 47700 0.01 2 1.11 0

No missing data for our scales of interest, yeah!

Patterns of missing data

Let’s check for patterns of missing data.

# Smaller subset of data for easier inspection
data %>%
  select(manualworkerId:BAQ_12, IAT, SOPT, 
         condition, manipsuccessleft,
         blastintensity.first:blastduration.first,
         blastintensity, blastduration,
         KIMS_1:KIMS_38) %>%
  vis_miss

Little’s MCAR test

# Let's use Little's MCAR test to confirm
# We have to proceed by "scale" because the function can only
# support 30 variables max at a time
library(naniar)
data %>% 
  select(BSCS_1:BSCS_7) %>% 
  mcar_test
statistic df p.value missing.patterns
0 0 0 1
data %>% 
  select(IAT_values.completed:IAT) %>% 
  mcar_test
statistic df p.value missing.patterns
134.0322 2 0 2
# data %>%
#  select(taylor_values.completed:taylor_expressions.meanrt_target) %>%
#  mcar_test

data %>%
 select(taylor_values.blastduration_9:taylor_values.blastintensity_9) %>%
 mcar_test
statistic df p.value missing.patterns
0 0 0 1
data %>%
 select(taylor_values.blastintensity_18:taylor_values.blastintensity_225) %>%
 mcar_test
statistic df p.value missing.patterns
0 0 0 1
data %>% 
  select(KIMS_1:KIMS_39) %>% 
  mcar_test
statistic df p.value missing.patterns
0 0 0 1

Impute missing data

Here, we impute missing data with the missForest package, as it is one of the best imputation methods. However, we have no missing data (except Country because of a malfunction from Qualtrics’ side), so there is no imputation to do in this case.

Details

Why impute the data? van Ginkel explains,

Regardless of the missingness mechanism, multiple imputation is always to be preferred over listwise deletion. Under MCAR it is preferred because it results in more statistical power, under MAR it is preferred because besides more power it will give unbiased results whereas listwise deletion may not, and under NMAR it is also the preferred method because it will give less biased results than listwise deletion.

van Ginkel, J. R., Linting, M., Rippe, R. C. A., & van der Voort, A. (2020). Rebutting existing misconceptions about multiple imputation as a method for handling missing data. Journal of Personality Assessment, 102(3), 297-308. https://doi.org/10.1080/00223891.2018.1530680

Why missForest? It outperforms other imputation methods, including the popular MICE (multiple imputation by chained equations). You also don’t end up with several datasets, which makes it easier for following analyses. Finally, it can be applied to mixed data types (missings in numeric & categorical variables).

Waljee, A. K., Mukherjee, A., Singal, A. G., Zhang, Y., Warren, J., Balis, U., … & Higgins, P. D. (2013). Comparison of imputation methods for missing laboratory data in medicine. BMJ Open, 3(8), e002847. https://doi.org/10.1136/bmjopen-2013-002847

Stekhoven, D. J., & Bühlmann, P. (2012). MissForest—non-parametric missing value imputation for mixed-type data. Bioinformatics, 28(1), 112-118. https://doi.org/10.1093/bioinformatics/btr597

# Need character variables as factors
# "Error: Can not handle categorical predictors with more than 53 categories."
# So we have to temporarily remove IDs also...
new.data <- data %>% 
  select(-c(manualworkerId, embeddedworkerId, Gender)) %>% 
  mutate(across(where(is.character), as.factor))

# Parallel processing
registerDoParallel(cores = 4)

# Variables
set.seed(100)
data.imp <- missForest(new.data, verbose = TRUE, parallelize = "variables")
# Total time is 2 sec (4*0.5) - 4 cores

# Extract imputed dataset
new.data <- data.imp$ximp

There are some variables we don’t actually want to impute, like country. We want to keep those NAs in that case. Let’s add them back. We also want to add ID back.

# Add ID
new.data <- bind_cols(manualworkerId = data$manualworkerId, new.data)

# Add back the NAs in country
data <- new.data %>% 
  mutate(country = data$country,
         gender = data$Gender)

Scale Means

Now that we have imputed the missing data, we are ready to calculate our scale means.

Self-control

# Copy all items with an "r" suffix, then
# reverse code items 2, 4, 6, 7 (1-5 scale)
data <- data %>% 
  mutate(across(starts_with("BSCS"), .names = "{col}r"))
data <- data %>% 
  mutate(across(c(BSCS_2, BSCS_4, BSCS_6, BSCS_7), ~nice_reverse(.x, 5), .names = "{col}r"))

# Get mean BSCS
data <- data %>% 
  mutate(BSCS = rowMeans(select(., BSCS_1r:BSCS_7r)))
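For reference, reverse coding on a 1-to-5 scale follows the standard min + max − x formula (which is what we assume nice_reverse(.x, 5) computes here):

```r
# Standard reversal on a 1-5 Likert scale: min + max - x
# (assumed to match nice_reverse(x, 5) with a minimum of 1)
x <- c(1, 2, 3, 4, 5)
1 + 5 - x  # 5 4 3 2 1
```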

Trait aggression

# Reverse code item 7
data <- data %>% 
  mutate(across(starts_with("BAQ"), .names = "{col}r"))
data <- data %>% 
  mutate(across(BAQ_7, ~nice_reverse(.x, 7), .names = "{col}r"))

# Get mean of BAQ
data <- data %>% 
  mutate(BAQ = rowMeans(select(., BAQ_1r:BAQ_12r)))

Trait Mindfulness

# Reverse code items 3-4, 8, 11-12, 14, 16, 18, 20, 22, 23-24, 27-28, 31-32, 35-36
data <- data %>% 
  mutate(across(starts_with("KIMS"), .names = "{col}r"))
data <- data %>% 
  mutate(across(all_of(paste0("KIMS_", c(3:4, 8, 11:12, 14, 16, 18, 20,
                                         22:24, 27:28, 31:32, 35:36))), 
                ~nice_reverse(.x, 5), .names = "{col}r"))

# Get mean of KIMS
data <- data %>% 
  mutate(KIMS = rowMeans(select(., KIMS_1r:KIMS_39r)))

Intensity * Duration

# Create new variable blastintensity.duration
data <- data %>% 
  mutate(blastintensity.duration = blastintensity * blastduration,
         blastintensity.duration.first = 
           blastintensity.first * blastduration.first,
         .after = blastduration)

Taylor all trials

# Standardize and center all trials
data <- data %>%
  mutate(across(
    taylor_values.blastduration_ms_9:taylor_values.blastintensity_225,
    ~(scale(.x) %>% as.vector),
    .names = "{col}_c"))

# Combine duration and intensity trials
data <- data %>%
  mutate(all50trials = rowMeans(select(
    ., taylor_values.blastduration_ms_9_c:taylor_values.blastintensity_225_c), na.rm = TRUE))

hist(data$all50trials)

Hugues Method

# First make sure we get the calculations right
# by manually recomputing the average 
# intensity and duration scores

data <- data %>% 
  mutate(blastintensity2 = rowMeans(
    select(
      ., taylor_values.blastintensity_9:taylor_values.blastintensity_225)),
    blastduration2 = rowMeans(
    select(
      ., taylor_values.blastduration_ms_9:taylor_values.blastduration_ms_225)))

# Check if the outcome makes sense
data$blastintensity2
##   [1]  4.92  4.76  5.80  6.92  4.28  4.96  1.52  5.08  4.28  3.96  5.72  1.12
##  [13]  3.16  8.96  4.52  7.96  4.88  4.28  4.48  4.24  1.24  1.92  3.68  5.60
##  [25]  3.64  3.52  5.44  4.64  7.16  5.36  3.72  6.32  3.96  0.04  3.80  4.64
##  [37]  4.56  5.00  5.36  1.04  3.48  5.04  5.40  3.04  3.60  5.88  1.00  5.76
##  [49]  4.88  5.52  5.56  6.72  5.04  4.76  3.96  4.60  5.76  2.28  3.04  4.52
##  [61]  1.44  0.00  7.44  4.56  6.48  7.80  0.00  0.00  3.56  4.04  6.04  9.08
##  [73] 10.00  4.80  9.80  7.56  6.16  2.80  0.00  5.16  1.24  1.00  2.76  6.48
##  [85]  3.36  1.20  2.20  4.40  3.32  5.44  5.08  2.76  4.52  4.24  9.24  0.00
##  [97]  3.28  1.60  2.84  4.20  6.00  6.60  3.76  2.96  5.68  4.16  0.00  9.44
## [109]  1.76  5.36  2.28  5.56  0.00  5.20  6.00  6.56  2.96  4.88  5.72  7.20
## [121]  6.88  5.68  1.24  2.52  3.12  1.00  5.96  0.52  8.28  3.72  0.68  2.32
## [133]  2.44  5.88  5.52 10.00  5.36  9.12  3.44  5.48  6.76  2.56  5.80  9.96
## [145]  7.24  9.24  6.36  0.12  6.88  5.20  2.40  6.08  5.28  6.68  0.80  2.36
## [157]  5.64  3.80  6.00  7.72  9.44  7.84 10.00  1.24  5.68 10.00  6.04  4.52
## [169]  5.64  4.72  4.48  1.32  8.08  5.68  5.16  3.40  5.44 10.00  5.60  4.96
## [181]  2.60  1.36  9.40  9.40  4.20  1.36  4.40  1.60  7.08  5.00  5.44  0.00
## [193]  1.76  7.68  0.00  6.08  5.96  5.36  0.92  4.40  4.32  2.48  4.68  4.08
## [205]  5.20  7.04  7.48  6.08  2.76  2.36  5.68  4.68  1.44  3.40  6.56  5.04
## [217]  4.04  7.88  3.08  6.68  1.64  6.84  5.36  8.52  2.40  0.00  4.68  0.00
## [229]  7.00  4.12  3.80  8.72  6.52  2.48  0.96  4.32  6.04  1.76  2.88  6.04
## [241]  3.16  5.76  8.40  4.28  4.08  4.64  5.40  2.88  5.24  4.40  6.92  0.00
## [253]  3.56  8.40  0.16  6.56  4.08  2.12  6.08  4.00  7.44  4.80  4.72  4.68
## [265]  4.44
# Check if it is the same as the original one
data$blastintensity2 == data$blastintensity
##   [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
##  [16] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
##  [31] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
##  [46] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
##  [61] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
##  [76] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
##  [91] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [106] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [121] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [136] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [151] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [166] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [181] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [196] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [211] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [226] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [241] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [256] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
# Success! It is the same! :)

# Let's now check duration, to see if it makes sense
data$blastduration2
##   [1] 1234.4 1206.4 1033.6 1293.6 1026.8 1447.6  540.8 1233.6  993.6  827.6
##  [11] 1133.6 1220.0  880.4 1793.2 1066.0 1499.6  934.4  833.6 1000.0  873.6
##  [21]  553.6  293.6  946.0 1232.8 1094.0  973.2 1327.2 1272.8 1593.2 1220.4
##  [31] 1080.4 1352.8  673.6   20.0 1167.2 1299.6 1106.4 1292.4 1248.8  486.4
##  [41]  900.8 1021.2 1007.2  839.2  826.0 1060.4  473.6 1259.6 1133.2 1006.8
##  [51] 1228.0 1266.4 1073.2 1352.8 1032.8  920.0 1114.0  768.8  792.8 1079.2
##  [61]  574.4    0.0 1446.4 1126.4 1540.4 1619.2    0.0    0.0  758.8  992.0
##  [71]  853.2 1800.0 2000.0 1080.4 2000.0 1533.2 1400.0  680.0    0.0 1182.4
##  [81]  280.0  500.0  780.4 1113.2  827.6 1600.0  534.0  640.8  691.6 1240.4
##  [91] 1107.2  634.4 1160.0 1034.0 1706.8    0.0  806.0  608.0  806.8  999.6
## [101] 1352.0 1392.8  953.2  815.2 1286.8  967.2    0.0 1899.2  514.0 1172.8
## [111]  500.0 1274.4   80.0 1046.0 1340.0 1313.2  818.0  493.2 1294.4 1447.2
## [121] 1473.2 1220.8  507.2  813.6  506.8  500.0 1206.4   26.8 1666.4  794.0
## [131]  273.6  773.2  746.8 1433.6 1220.0 2000.0 1220.0 1867.2  927.6 1286.8
## [141]  885.6  740.0 1292.4 2000.0 1212.8 1827.2 1234.0   40.0 1454.4 1186.8
## [151]  906.4  800.0 1100.4  934.0  380.0  600.0 1426.0  681.2 1200.0 1612.8
## [161] 1733.6 1599.6 2000.0  540.0 1406.0 2000.0 1325.6 1166.8 1214.0 1118.8
## [171] 1327.2  433.6  954.0 1219.6 1120.4  840.4 1300.0 1920.0 1307.6 1073.2
## [181]  707.2  779.6 1973.2 1426.8 1040.0  541.2  900.4  566.8 1553.6 1060.4
## [191] 1159.6    0.0  740.0 1727.2    0.0 1146.8  967.2 1120.0  500.0 1146.8
## [201] 1066.8  500.0 1106.0  914.0 1160.0 1488.0 1313.2 1393.2  694.0  433.6
## [211] 1146.4  770.8  266.8  813.2 1181.2 1153.2 1513.2 1753.6  826.4 1419.6
## [221]  608.8 1534.0 1247.2 1833.6  653.2    0.0 1040.8    0.0 1500.0 1147.2
## [231]  853.6 1613.2 1347.6  747.6  500.0 1213.2 1227.2  646.8 1040.4 1305.2
## [241]  853.2 1295.2 1707.2  660.0 1086.4 1053.6 1159.6  906.0 1120.4  853.2
## [251] 1620.0    0.0  900.0 1380.4   20.0 1418.8 1019.6  693.6 1219.6 1020.0
## [261] 1213.2 1180.8 1034.0 1113.2 1026.0
# Check if it is the same as the original one
data$blastduration2 == data$blastduration
##   [1] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
##  [16] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
##  [31] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
##  [46] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
##  [61] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
##  [76] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
##  [91] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [106] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [121] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [136] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [151] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [166] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [181] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [196] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [211] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [226] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [241] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
## [256] TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE TRUE
# Extract names of the blastduration trials only
taylor_list <- data %>% 
  select(
    taylor_values.blastduration_ms_9:taylor_values.blastduration_ms_225) %>% 
  names

# Get the suffix numbers only for the trials
num_list <- lapply(taylor_list, function(x) {
  gsub("taylor_values.blastduration_ms_", "", x)
})

# Create new mixed trials blastintensity.duration
# By multiplying duration with intensity
# Resulting in a column named "mixed"
for (i in num_list) {
  data[paste0("taylor_values.mixed_", i)] <- 
    data[paste0("taylor_values.blastduration_ms_", i)] *
    data[paste0("taylor_values.blastintensity_", i)]
}

# Check first trial of mixed (composite) values to see if it makes sense
data$taylor_values.mixed_9
##   [1]  4680  8190 11700 16700 11690  4020  2500 14640  7470     0  9000  4000
##  [13]   500 20000 20000 20000  6000 14640  5000     0   830  6700  5850 18300
##  [25] 12000 20000  8000 11970  1660 16700 11690 20000  2010   500 11690 18000
##  [37]  4150 20000  2000 16000     0 10000  5000  1340  4020  5360  1340  8190
##  [49]   500  5320  9310 11970  7000 20000  7980  4150  5850  2000  2010 10500
##  [61]  1660     0 16700  5850  7000  4150     0     0  9000  2490  7500 20000
##  [73] 20000 20000 20000 20000     0  2000     0 12000     0   500 10500  9000
##  [85] 20000 20000 20000  5000  2500 16700  2010  3000 10020  5850 20000     0
##  [97]  3500  3320 13360  7980 10500  6640 10020 10500  4000 11690     0 11970
## [109]     0 20000  2500  3510     0 16700  9000 15000  1340     0 11690 20000
## [121] 20000 20000   500  1500  2000   500  7000  6700  4500  6700  2490  9000
## [133]  7000 16470 20000 20000  8300 11690 18300 11690 11700   500  9310 20000
## [145]  5000  7020     0  1000 13500 20000  6680  7000  3510  8190   500  5000
## [157] 12000  2000     0 10640  8190 10500 20000   500  4000 20000 13500  4000
## [169]  8190 20000     0 10500 16470  9310     0     0  9310     0  1340 15000
## [181]     0  1660 18000  4000  9000   500  4000  5000 20000  9000  7000     0
## [193]  2000 13500     0  4000  2680  5000     0 11970 10500   500  4680  7000
## [205]  2500 12000  5000 20000  1340  3500  2680  3350  1340 20000 11700 10500
## [217]  7320 18000   500 20000   500 20000 20000 20000     0     0 20000     0
## [229] 10500     0  3500 20000 15030 10500  2000 20000 12000  6000 16470 20000
## [241]     0  5850  9360 10500 20000 20000  9000 20000  3510 10000  5320     0
## [253]  1000 10500  2000 10530 13300   500  7000   500  5000  8190  8190 20000
## [265]  7470
nice_normality(data, variable = "taylor_values.mixed_9", histogram = TRUE)

# Check last trial of mixed (composite) values to see if it makes sense
data$taylor_values.mixed_225
##   [1] 18000  5810     0  5000  1340  8300   500  4000     0 20000 20000  2000
##  [13]  4000 18300  7980 20000 12000 11700  5000  8000   500     0 10500  7980
##  [25]  4000  2490  6650 15000 20000  2490  1660  6000 13300     0  3000 20000
##  [37]  7000  4000 20000   830  5850  5000  3340  4000  2000 20000   500 20000
##  [49] 14640  6000  1340  7980  4000 12000  4000  3320 10640  4000  2490 16470
##  [61]  2010     0 16470 20000     0 13500     0     0  5000  7980   500   500
##  [73] 20000 10500 20000 20000  2490  1000     0  5850     0   500   500  4980
##  [85]  5850     0   500  5000  2490  5850 10500  3350 20000  4000 20000     0
##  [97]  6000  1000   670  7980 20000 20000  2010  1340 20000  3320     0 16470
## [109]  2000 20000  1000 13360     0 13360 13360  6000  2490  1500 18300  6030
## [121] 20000 10000   500  3000  2010   500 20000     0 20000     0   500  2010
## [133]   500   500 10500 20000  5850 20000  1340 16470     0 18300 10500 20000
## [145]  7980 20000 10500     0 10000  4000  2490  2500  4000 20000   500     0
## [157] 16470  2000 20000 13360 20000 20000 20000  1340  7980 20000 10500 20000
## [169] 15030 20000  4680     0  6030 10500 20000  8190  8350 20000  9360  6000
## [181]  8000  1000 16470 16700  5850  1340  5850   500 20000  4000 20000     0
## [193]   500 13360     0   500  1500 20000   500 14640  2680  1500  2490  3000
## [205]  7500 16470 15000 18300  1340   500  7980     0     0   500  9360  7980
## [217]  6000 16470  5320  8190  1340  9310  7980  7020  2490     0  4000     0
## [229] 10500  4000  2490 16470  9150  5850   500  2000 10500     0  4000  6000
## [241]     0  7980 20000     0  6680  5000 20000   500  8000  3000 20000     0
## [253]  4000 13360     0 10500  4000  4000  7980  7000 16000  1500  1340  4000
## [265]  3320
nice_normality(data, variable = "taylor_values.mixed_225", histogram = TRUE)

# Calculate average of all trials! :)
# Note: c_across() is needed here; a bare `:` between two columns
# inside rowwise() would build a numeric sequence of the two cell
# values rather than select the range of columns
data <- data %>% 
  rowwise() %>% 
  mutate(taylor_hugues = mean(c_across(
    taylor_values.mixed_9:taylor_values.mixed_225))) %>% 
  ungroup()

# Check distribution of final variable
nice_normality(data, variable = "taylor_hugues", histogram = TRUE)

Lindsay Method

# Calculate log of duration
# We also have to add a constant of one to avoid 
# getting infinite values when taking the log of zero durations...
data <- data %>% 
  mutate(across(
    taylor_values.blastduration_ms_9:taylor_values.blastduration_ms_225,
    function(x) {log(x + 1)}, .names = "log_{col}"))

# Resulting in a column named "mixed", but with log
for (i in num_list) {
  data[paste0("log_taylor_values.mixed_", i)] <- 
    data[paste0("log_taylor_values.blastduration_ms_", i)] *
    data[paste0("taylor_values.blastintensity_", i)]
}

data$log_taylor_values.mixed_108
##   [1] 13.017538 28.262453 76.014023 34.543774 27.635019 76.014023  6.216606
##   [8] 27.635019 20.167889 40.335779 34.543774  6.908755 27.635019 76.014023
##  [15] 27.635019 36.569434 34.543774  6.216606 27.635019 13.017538  6.216606
## (remaining values omitted)
# Calculate average of all trials! :)
data <- data %>% 
  rowwise() %>% 
  mutate(taylor_lindsay = mean(c_across(
    log_taylor_values.mixed_9:log_taylor_values.mixed_225))) %>% 
  ungroup()

# Check distribution of final variable
nice_normality(data, variable = "taylor_lindsay", histogram = TRUE)

Carnagey Method

# Calculate square root of duration
data <- data %>% 
  mutate(across(
    taylor_values.blastduration_ms_9:taylor_values.blastduration_ms_225,
    function(x) {sqrt(x)}, .names = "sqrt_{col}"))

# Recompute the "mixed" columns, this time using the square root of duration
for (i in num_list) {
  data[paste0("sqrt_taylor_values.mixed_", i)] <- 
    data[paste0("sqrt_taylor_values.blastduration_ms_", i)] *
    data[paste0("taylor_values.blastintensity_", i)]
}

data$sqrt_taylor_values.mixed_108
##   [1]  51.76872 136.82105 447.21360 158.11388 126.49111 447.21360  22.36068
##   [8] 126.49111  86.42916 172.85832 158.11388  31.62278 126.49111 447.21360
##  [15] 126.49111 193.64917 158.11388  22.36068 126.49111  51.76872  22.36068
## (remaining values omitted)
# Calculate average of all trials! :)
data <- data %>% 
  rowwise() %>% 
  mutate(taylor_carnagey = mean(c_across(
    sqrt_taylor_values.mixed_9:sqrt_taylor_values.mixed_225))) %>% 
  ungroup()

# Check distribution of final variable
nice_normality(data, variable = "taylor_carnagey", histogram = TRUE)

Reliability

Now that we have reverse-scored our items, we can compute the alphas for our different scales.

Self-control

x <- data %>% 
  select(BSCS_1r:BSCS_7r) %>% 
  alpha
x$feldt
## 
##    95% confidence boundaries (Feldt)
##  lower alpha upper
##    0.8  0.84  0.86

Trait aggression

x <- data %>% 
  select(BAQ_1r:BAQ_12r) %>% 
  alpha
x$feldt
## 
##    95% confidence boundaries (Feldt)
##  lower alpha upper
##   0.77  0.81  0.84

Trait Mindfulness

x <- data %>% 
  select(KIMS_1r:KIMS_39r) %>% 
  alpha
## Warning in alpha(.): Some items were negatively correlated with the total scale and probably 
## should be reversed.  
## To do this, run the function again with the 'check.keys=TRUE' option
## Some items ( KIMS_19r ) were negatively correlated with the total scale and 
## probably should be reversed.  
## To do this, run the function again with the 'check.keys=TRUE' option
x$feldt
## 
##    95% confidence boundaries (Feldt)
##  lower alpha upper
##   0.87  0.89  0.91

We are getting the “Some items were negatively correlated with the total scale and probably should be reversed” warning (here for KIMS_19r; before imputation, KIMS_17r was also flagged). Let’s check these two items. In the raw data, item 17 is:

  17. “I notice how foods and drinks affect my thoughts, bodily sensations, and emotions.”

And item 19 is:

  19. “When I do things, I get totally wrapped up in them and don’t think about anything else.”

These correspond to items 17 and 19 as described in the original paper (Baer, Smith, & Allen, 2004). Theoretically, it also makes sense not to reverse-score them: they seem to tap mindfulness, not its absence. So it is somewhat strange that psych::alpha suggests reverse-scoring these items. Perhaps the scale is not meant to be used as a single factor, since it comprises four subscales.

Note: before the imputation, this warning was worse in the sense that a lot more items were flagged in this way: “Some items ( KIMS_1r KIMS_5r KIMS_9r KIMS_13r KIMS_17r KIMS_19r KIMS_21r KIMS_25r KIMS_33r KIMS_37r KIMS_38r ) were negatively correlated with the total scale and probably should be reversed.”
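For the curious, this warning is based on item-total correlations. A minimal base-R sketch with simulated (hypothetical) items shows how a mis-keyed item gets flagged:

```r
set.seed(1)
latent <- rnorm(100)                                    # simulated trait scores
items  <- sapply(1:4, function(i) latent + rnorm(100))  # four correctly keyed items
items  <- cbind(items, -latent + rnorm(100))            # one mis-keyed (reversed) item

# Correlate each item with the total of the remaining items
r_it <- sapply(seq_len(ncol(items)),
               function(i) cor(items[, i], rowSums(items[, -i])))
which(r_it < 0)  # flags the mis-keyed fifth item, like psych::alpha's warning
```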

t-tests

In this section, we will: (a) test assumptions of normality, (b) transform variables violating assumptions, (c) test assumptions of homoscedasticity, (d) identify and winsorize outliers, and (e) conduct the t-tests.

Normality

lapply(col.list, function(x) 
  nice_normality(data, 
                 variable = x, 
                 title = x,
                 group = "condition",
                 shapiro = TRUE,
                 histogram = TRUE))
## (plots [[1]] to [[15]]: one normality plot per variable)

Several variables are clearly skewed. Let’s apply transformations. But first, let’s deal with the working memory task, the Self-Ordered Pointing Task (SOPT), which is clearly problematic.

SOPT Exclusions

Let’s draw a proper histogram to see if it helps diagnose the problem with the SOPT.

hist(data$SOPT)

That looks weird: there are some obvious outliers here. These participants probably did not do the task correctly, especially since there is a gap after 60 errors. Let’s see how many people made more than 60 errors.

data %>% 
  filter(SOPT > 60) %>% 
  count
n
11

There are 11 people with more than 60 errors. Let’s exclude them.

# data <- data %>% 
#   filter(SOPT <= 60)
cat(report_participants(data, threshold = 1))

265 participants (Gender: 55.8% women, 29.4% men, 0.00% non-binary, 14.72% missing; Country: 95.09% USA, 1.51% missing, 1.13% Canada, 2.26% other)

Transformation

The function below transforms variables according to the best possible transformation, as selected by the bestNormalize package (we standardize the variables separately, later on).

predict_bestNormalize <- function(var) {
  x <- bestNormalize(var, standardize = FALSE, allow_orderNorm = FALSE)
  print(cur_column())
  print(x$chosen_transform)
  cat("\n")
  predict(x)
}

set.seed(42)
data <- data %>% 
  mutate(across(all_of(col.list), 
                predict_bestNormalize,
                .names = "{.col}.t"))
## [1] "blastintensity"
## I(x) Transformation with 265 nonmissing obs.
## 
## [1] "blastduration"
## I(x) Transformation with 265 nonmissing obs.
## 
## [1] "blastintensity.duration"
## Non-Standardized sqrt(x + a) Transformation with 265 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 68.3635 
##  - sd (before standardization) = 32.41978 
## 
## [1] "blastintensity.first"
## Non-Standardized Yeo-Johnson Transformation with 265 nonmissing obs.:
##  Estimated statistics:
##  - lambda = 0.9886136 
##  - mean (before standardization) = 5.840111 
##  - sd (before standardization) = 3.351916 
## 
## [1] "blastduration.first"
## I(x) Transformation with 265 nonmissing obs.
## 
## [1] "blastintensity.duration.first"
## Non-Standardized sqrt(x + a) Transformation with 265 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 79.96841 
##  - sd (before standardization) = 45.02731 
## 
## [1] "all50trials"
## Non-Standardized asinh(x) Transformation with 265 nonmissing obs.:
##  Relevant statistics:
##  - mean (before standardization) = 0.00400018 
##  - sd (before standardization) = 0.626307 
## 
## [1] "taylor_hugues"
## Non-Standardized sqrt(x + a) Transformation with 265 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 81.73025 
##  - sd (before standardization) = 36.85809 
## 
## [1] "taylor_lindsay"
## I(x) Transformation with 265 nonmissing obs.
## 
## [1] "taylor_carnagey"
## I(x) Transformation with 265 nonmissing obs.
## 
## [1] "KIMS"
## Non-Standardized Log_b(x + a) Transformation with 265 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - b = 10 
##  - mean (before standardization) = 0.5309176 
##  - sd (before standardization) = 0.06041606 
## 
## [1] "BSCS"
## I(x) Transformation with 265 nonmissing obs.
## 
## [1] "BAQ"
## Non-Standardized sqrt(x + a) Transformation with 265 nonmissing obs.:
##  Relevant statistics:
##  - a = 0 
##  - mean (before standardization) = 1.731679 
##  - sd (before standardization) = 0.2764399 
## 
## [1] "SOPT"
## Non-Standardized asinh(x) Transformation with 265 nonmissing obs.:
##  Relevant statistics:
##  - mean (before standardization) = 3.166081 
##  - sd (before standardization) = 0.7976016 
## 
## [1] "IAT"
## Non-Standardized Yeo-Johnson Transformation with 265 nonmissing obs.:
##  Estimated statistics:
##  - lambda = 0.7604596 
##  - mean (before standardization) = -0.5440929 
##  - sd (before standardization) = 0.398327
col.list <- paste0(col.list, ".t")

Note. The I(x) transformations above are not actually transformations: I() is shorthand for passing the data “as is”, meaning that none of the attempted transformations improved normality for those variables, so none was applied. This only appears when standardize is set to FALSE. When it is set to TRUE, those variables instead show center_scale(x), meaning the data are simply centered and scaled, without any normalizing transformation.
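To picture the selection logic, here is a simplified base-R sketch: try several candidate transformations and keep whichever looks most normal. (This is only an illustration; bestNormalize actually ranks transformations with a cross-validated Pearson normality statistic, not Shapiro-Wilk.)

```r
set.seed(123)
x <- rexp(265)  # right-skewed toy variable, same n as our sample

candidates <- list(identity = x,
                   sqrt     = sqrt(x),
                   log      = log(x + 1),
                   asinh    = asinh(x))

# Shapiro-Wilk W is closer to 1 the more normal the data look
w <- sapply(candidates, function(z) unname(shapiro.test(z)$statistic))
names(which.max(w))  # the untransformed (skewed) version does not win here
```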

Let’s check if normality was corrected.

# Group normality
lapply(col.list, function(x) 
  nice_normality(data, 
                 x, 
                 "condition",
                 shapiro = TRUE,
                 title = x,
                 histogram = TRUE))
## (plots [[1]] to [[15]]: one normality plot per variable)

These look rather reasonable now, though not perfect (fortunately, the t-test is quite robust to violations of normality).

We can now resume with the next step: checking variance.

Homoscedasticity

# Plotting variance
plots(lapply(col.list, function(x) {
  nice_varplot(data, x, group = "condition")
  }),
  n_columns = 3)

Variance looks good: no group has more than four times the variance of any other group. We can now move on to checking outliers.

Outliers

We check outliers visually with the plot_outliers function, which draws red lines at +/- 3 median absolute deviations.

plots(lapply(col.list, function(x) {
  plot_outliers(data, x, group = "condition", ytitle = x, binwidth = 0.2)
  }),
  n_columns = 2)

There are some outliers, but nothing unreasonable. Let’s still check with the 3 median absolute deviations (MAD) method.

data %>% 
  as.data.frame %>% 
  filter(condition == "Control") %>% 
  find_mad(col.list, criteria = 3)
## 8 outlier(s) based on 3 median absolute deviations for variable(s): 
##  blastintensity.t, blastduration.t, blastintensity.duration.t, blastintensity.first.t, blastduration.first.t, blastintensity.duration.first.t, all50trials.t, taylor_hugues.t, taylor_lindsay.t, taylor_carnagey.t, KIMS.t, BSCS.t, BAQ.t, SOPT.t, IAT.t 
## 
## Outliers per variable: 
## 
## $SOPT.t
##   Row SOPT.t_mad
## 1  18   3.907513
## 2  19  -5.584099
## 3  31   3.729806
## 4  36   3.907513
## 5  38   3.907513
## 6  51  -3.992921
## 7  75   3.907513
## 
## $IAT.t
##   Row IAT.t_mad
## 1  20 -3.022411
data %>% 
  as.data.frame %>% 
  filter(condition == "Mindfulness") %>% 
  find_mad(col.list, criteria = 3)
## 8 outlier(s) based on 3 median absolute deviations for variable(s): 
##  blastintensity.t, blastduration.t, blastintensity.duration.t, blastintensity.first.t, blastduration.first.t, blastintensity.duration.first.t, all50trials.t, taylor_hugues.t, taylor_lindsay.t, taylor_carnagey.t, KIMS.t, BSCS.t, BAQ.t, SOPT.t, IAT.t 
## 
## Outliers per variable: 
## 
## $SOPT.t
##   Row SOPT.t_mad
## 1   5   3.457244
## 2  40  -3.843806
## 3  47   3.474755
## 4  57   3.457244
## 5  60   3.474755
## 6  78   3.403583
## 7  92  -5.317793
## 8 130   3.474755

After our transformations, there are 8 outliers in the control group and 8 outliers in the mindfulness group.

Winsorization

Visual assessment and the MAD method confirm we have some outlier values. We could ignore them, but because they could have a disproportionate influence on the models, one recommendation is to winsorize them, bringing extreme values back to the 3-deviation threshold. Instead of the standard deviation around the mean, however, we use the absolute deviation around the median, as it is more robust to extreme observations. For a discussion, see:

Leys, C., Klein, O., Bernard, P., & Licata, L. (2013). Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median. Journal of Experimental Social Psychology, 49(4), 764–766. https://doi.org/10.1016/j.jesp.2013.03.013

# Winsorize variables of interest with MAD
data <- data %>% 
  group_by(condition) %>% 
  mutate(across(all_of(col.list), 
                winsorize_mad,
                .names = "{.col}.w")) %>% 
  ungroup()

# Update col.list
col.list <- paste0(col.list, ".w")

Outliers are still present but were brought back within reasonable limits, where applicable.
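For reference, here is a minimal base-R sketch of what MAD-based winsorization does (our reading of rempsyc::winsorize_mad; the helper name and toy data are ours):

```r
winsorize_mad_sketch <- function(x, criteria = 3) {
  med <- median(x, na.rm = TRUE)
  dev <- mad(x, na.rm = TRUE)  # scaled MAD, consistent with the SD under normality
  # Clip values outside median +/- 3 MAD back to the nearest bound
  pmin(pmax(x, med - criteria * dev), med + criteria * dev)
}

set.seed(42)
x <- c(rnorm(50), 10)  # one clear high outlier
y <- winsorize_mad_sketch(x)
max(y)                 # the outlier is pulled back to the upper bound
```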

Standardization

We can now standardize our variables.

data <- data %>%
  mutate(across(all_of(col.list),
                function(x) {
                  as.numeric(scale(x))
                  },
                .names = "{.col}.s"))

# Update col.list
col.list <- paste0(col.list, ".s")

We are now ready to compare the group condition (Control vs. Mindfulness Priming) across our different variables with the t-tests.

t-tests

nice_t_test(data, 
            response = col.list, 
            group = "condition") %>% 
  nice_table(highlight = 0.10, width = .80)
## Using Welch t-test (base R's default; cf. https://doi.org/10.5334/irsp.82).
## For the Student t-test, use `var.equal = TRUE`. 
##  

Interpretation: There is no clear group effect of our experimental condition on our different variables. There used to be a marginal effect on blast intensity; that marginal effect now appears on the BAQ and IAT instead. However, the experimental manipulation could not have influenced these measures, since they were administered chronologically before it. We still examine blast intensity visually, for theoretical reasons and to help choose the best measure.
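As the console note above indicates, nice_t_test defaults to Welch’s t-test, matching base R’s t.test default, which does not assume equal variances. A quick check on hypothetical toy data:

```r
set.seed(7)
toy <- data.frame(condition = rep(c("Control", "Mindfulness"), each = 30),
                  y = rnorm(60))

# Base R defaults to Welch (var.equal = FALSE); pass var.equal = TRUE for Student's
t.test(y ~ condition, data = toy)$method
```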

Violin plots

Blast Intensity

nice_violin(data, 
            group = "condition", 
            response = "blastintensity.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = 1)

Blast Duration

nice_violin(data, 
            group = "condition", 
            response = "blastduration.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = 1)

Intensity * Duration

nice_violin(data, 
            group = "condition", 
            response = "blastintensity.duration.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = 1)

1st Intensity

nice_violin(data, 
            group = "condition", 
            response = "blastintensity.first.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = -0.8)

1st Duration

nice_violin(data, 
            group = "condition", 
            response = "blastduration.first.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = -1.5)

1st Intensity * Duration

nice_violin(data, 
            group = "condition", 
            response = "blastintensity.duration.first.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = -1.2)

50 trials

nice_violin(data, 
            group = "condition", 
            response = "all50trials.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = 1)

Taylor Hugues

nice_violin(data, 
            group = "condition", 
            response = "taylor_hugues.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = 1)

Taylor Lindsay

nice_violin(data, 
            group = "condition", 
            response = "taylor_lindsay.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = 1)

Taylor Carnagey

nice_violin(data, 
            group = "condition", 
            response = "taylor_carnagey.t.w.s",
            comp1 = 1,
            comp2 = 2,
            obs = TRUE,
            has.d = TRUE,
            d.y = 1)

Means, SD

Let’s extract the means and standard deviations for journal reporting.

Blast Intensity

data %>% 
    group_by(condition) %>% 
    summarize(M = mean(blastintensity),
              SD = sd(blastintensity),
              N = n()) %>% 
  nice_table(width = 0.40)

Blast Duration

data %>% 
    group_by(condition) %>% 
    summarize(M = mean(blastduration),
              SD = sd(blastduration),
              N = n()) %>% 
  nice_table(width = 0.40)

Intensity * Duration

data %>% 
    group_by(condition) %>% 
    summarize(M = mean(blastintensity.duration),
              SD = sd(blastintensity.duration),
              N = n()) %>% 
  nice_table(width = 0.40)

1st Intensity

data %>% 
    group_by(condition) %>% 
    summarize(M = mean(blastintensity.first),
              SD = sd(blastintensity.first),
              N = n()) %>% 
  nice_table(width = 0.40)

1st Duration

data %>% 
    group_by(condition) %>% 
    summarize(M = mean(blastduration.first),
              SD = sd(blastduration.first),
              N = n()) %>% 
  nice_table(width = 0.40)

1st Intensity * Duration

data %>% 
    group_by(condition) %>% 
    summarize(M = mean(blastintensity.duration.first),
              SD = sd(blastintensity.duration.first),
              N = n()) %>% 
  nice_table(width = 0.40)

50 trials

data %>% 
    group_by(condition) %>% 
    summarize(M = mean(all50trials),
              SD = sd(all50trials),
              N = n()) %>% 
  nice_table(width = 0.40)

Taylor Hugues

data %>% 
    group_by(condition) %>% 
    summarize(M = mean(taylor_hugues),
              SD = sd(taylor_hugues),
              N = n()) %>% 
  nice_table(width = 0.40)

Moderations

Let’s see whether our variables interact with our experimental condition. But first, let’s test the models’ assumptions.

Assumptions

Blast Intensity

big.mod1 <- lm(blastintensity.t.w.s ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s, 
               data = data, na.action="na.exclude")
check_model(big.mod1)

Blast Duration

big.mod2 <- lm(blastduration.t.w.s ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s, 
               data = data, na.action="na.exclude")
check_model(big.mod2)

Intensity * Duration

big.mod3 <- lm(blastintensity.duration.t.w.s ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s, 
               data = data, na.action="na.exclude")
check_model(big.mod3)

1st Intensity

big.mod4 <- lm(blastintensity.first.t.w.s ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s, 
               data = data, na.action="na.exclude")
check_model(big.mod4)

1st Duration

big.mod5 <- lm(blastduration.first.t.w.s ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s, 
               data = data, na.action="na.exclude")
check_model(big.mod5)

1st Intensity * Duration

big.mod6 <- lm(blastintensity.duration.first.t.w.s ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s, 
               data = data, na.action="na.exclude")
check_model(big.mod6)

50 trials

big.mod7 <- lm(all50trials.t.w.s ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s, 
               data = data, na.action="na.exclude")
check_model(big.mod7)

Taylor Hugues

big.mod8 <- lm(taylor_hugues.t.w.s ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s, 
               data = data, na.action="na.exclude")
check_model(big.mod8)

Taylor Lindsay

big.mod9 <- lm(taylor_lindsay.t.w.s ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s, 
               data = data, na.action="na.exclude")
check_model(big.mod9)

Taylor Carnagey

big.mod10 <- lm(taylor_carnagey.t.w.s ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s, 
               data = data, na.action="na.exclude")
check_model(big.mod10)

All the model assumptions actually look pretty good overall, even with all these variables. The lines for linearity and homoscedasticity are slightly off, but nothing too concerning. Let’s now look at the results.

Moderations

Blast Intensity

big.mod1 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)

Blast Duration

big.mod2 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)

Intensity * Duration

big.mod3 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)

1st Intensity

big.mod4 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)

1st Duration

big.mod5 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)

1st Intensity * Duration

big.mod6 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)

All 50 trials

big.mod7 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)

Taylor Hugues

big.mod8 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)

Taylor Lindsay

big.mod9 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)

Taylor Carnagey

big.mod10 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)

Model Steph

big.mod11 <- lm(blastintensity.duration.t.w.s ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s +
                 blastintensity.t.w.s + blastduration.t.w.s, 
               data = data, na.action="na.exclude")
check_model(big.mod11)

big.mod11 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)
big.mod12 <- lm(blastduration.t.w.s ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s +
                  blastintensity.t.w.s, 
               data = data, na.action="na.exclude")
big.mod12 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)
big.mod13 <- lm(taylor_var ~ condition_dum*KIMS.t.w.s + 
                 condition_dum*BSCS.t.w.s + condition_dum*BAQ.t.w.s + 
                 condition_dum*SOPT.t.w.s + condition_dum*IAT.t.w.s, 
               data = data, na.action="na.exclude")
## Error in eval(predvars, data, env): object 'taylor_var' not found
big.mod13 %>% 
  nice_lm(b.label = "B") %>% 
  nice_table(highlight = TRUE)
## Error in ifelse(class(model) == "list", models.list <- model, models.list <- list(model)): object 'big.mod13' not found

Interpretation: The condition by trait self-control (Brief Self-Control Scale, BSCS) interaction comes up for all variables, so it seems fairly reliable.

Interaction plots

Let’s plot the main significant interaction(s).

Blast Intensity

interact_plot(big.mod1, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "Condition", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

Blast Duration

interact_plot(big.mod2, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "Condition", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

Intensity * Duration

interact_plot(big.mod3, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "Condition", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

1st Intensity

interact_plot(big.mod4, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "Condition", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

1st Duration

interact_plot(big.mod5, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "Condition", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

1st Intensity * Duration

interact_plot(big.mod6, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, x.label = "Condition", interval = TRUE, 
              pred.labels = c("Control", "Mindfulness"), 
              legend.main = "Trait Self-Control")

50 trials

interact_plot(big.mod7, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "Condition", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

Taylor Hugues

interact_plot(big.mod8, pred = "condition_dum", modx = "BSCS.t.w.s", 
              modxvals = NULL, interval = TRUE, x.label = "Condition", 
              pred.labels = c("Control", "Mindfulness"),
              legend.main = "Trait Self-Control")

Interpretation: The interaction is pretty much the same for all models. Counterintuitively, for people with low self-control, the priming mindfulness condition relates to lower aggression relative to the control condition. In contrast, for people with high self-control, the priming mindfulness condition relates to higher aggression.

Simple slopes

Let’s look at the simple slopes now (only for the significant interaction).

Blast Intensity

big.mod1 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s",
                 b.label = "B") %>% 
  nice_table(highlight = TRUE)

Blast Duration

big.mod2 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s",
                 b.label = "B") %>% 
  nice_table(highlight = TRUE)

Intensity * Duration

big.mod3 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s",
                 b.label = "B") %>% 
  nice_table(highlight = TRUE)

1st Intensity

big.mod4 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s",
                 b.label = "B") %>% 
  nice_table(highlight = TRUE)

1st Duration

big.mod5 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s",
                 b.label = "B") %>% 
  nice_table(highlight = TRUE)

1st Intensity * Duration

big.mod6 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s",
                 b.label = "B") %>% 
  nice_table(highlight = TRUE)

All 50 trials

big.mod7 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s",
                 b.label = "B") %>% 
  nice_table(highlight = TRUE)

Taylor Hugues

big.mod8 %>%
  nice_lm_slopes(predictor = "condition_dum",
                 moderator = "BSCS.t.w.s",
                 b.label = "B") %>% 
  nice_table(highlight = TRUE)

Interpretation: The effect of priming mindfulness on blast intensity is only significant for people with high self-control.

---
title: '**Priming Mindfulness Project**'
subtitle: Comparison & analysis report
author: "Rémi Thériault"
date: "`r format(Sys.Date())`"
output:
  html_document:
    theme: cerulean
    highlight: pygments
    toc: yes
    toc_depth: 3
    toc_float: yes
    number_sections: no
    df_print: kable
    code_folding: show # or: hide
    code_download: yes
    anchor_sections:
      style: symbol
---

```{r setup, warning=FALSE, message=TRUE, include=FALSE}
fast <- FALSE  # Make this true to skip the chunks
```

```{r klippy, echo=FALSE, include=TRUE}
klippy::klippy(position = c('top', 'right'))
```

# Introduction

This report describes the results of the first priming mindfulness study. It was initially made using [Dominique Makowski's](https://dominiquemakowski.github.io/post/2021-02-10-template_results/) Supplementary Materials template. Two similar templates will be created based on the two online pilot studies from the Varela project.

This is an **exploratory** (not confirmatory) study. The result of this exploration will be used to conduct a second, confirmatory, preregistered study.

---
Note also that this data has been cleaned beforehand. Five datasets were merged (joined) through an inner join—3 Qualtrics surveys and 2 Inquisit tasks—so as to keep only participants who at least participated at each step of the study. Missing data will be imputed later on. Duplicates were addressed with the `rempsyc::best_duplicate` function, which keeps the duplicate with the least amount of missing values, and in case of ties, takes the first occurrence.


# Packages & Data

## Packages

```{r warning=FALSE, message=FALSE, results='asis'}
library(rempsyc)
library(dplyr)
library(interactions)
library(performance)
library(see)
library(ggplot2)
library(report)
library(bestNormalize)
library(psych)
library(visdat)
library(missForest)
library(doParallel)

summary(report(sessionInfo()))
```

## Data

```{r warning=FALSE, message=TRUE, results='asis'}
# Read data
data <- read.table("data/fulldataset.txt", sep = "\t", header = TRUE)

# Dummy-code group variable
data <- data %>% 
  mutate(condition_dum = ifelse(condition == "Mindfulness", 1, 0),
         condition = as.factor(condition))

cat(report_participants(data, threshold = 1))

# Allocation ratio
report(data$condition)

```

### Taylor Task... {.tabset}

At this stage, we define a list of our relevant variables.

#### Preparation

```{r warning=FALSE, message=TRUE, results='asis'}
# Make list of DVs
col.list <- c("blastintensity", "blastduration", "blastintensity.duration",
              "blastintensity.first", "blastduration.first", 
              "blastintensity.duration.first", "all50trials",
              "taylor_hugues", "taylor_lindsay", "taylor_carnagey", "KIMS", 
              "BSCS", "BAQ", "SOPT", "IAT")

```

#### Blast Intensity * Duration

Why combine the intensity and duration scores? Should we? For a discussion, see:

Elson, M., Mohseni, M. R., Breuer, J., Scharkow, M., & Quandt, T. (2014). Press CRTT to measure aggressive behavior: The unstandardized use of the competitive reaction time task in aggression research. *Psychological Assessment*, *26*(2), 419. https://doi.org/10.1037/a0035569

* Bushman and Baumeister (1998) used the sum of volume and duration settings in the first of 25 trials [p. 3]
* Lindsay and Anderson (2000) multiplied volume with log-transformed duration settings. The average over 25 trials of those products was their measure for overall aggression.
* Carnagey and Anderson (2005) averaged the products of volume and the square root of duration to form a single “aggressive energy score” (p. 887). No reason is given for this other than the claim that this single score supposedly is a valid measure and that duration should be square rooted.
* Bartholow, Sestir, and Davis (2005) multiplied the average volume and duration settings to form a composite aggressive behavior score. Although Bartholow, Bushman, and Sestir (2006) also used volume and duration settings, they standardized and summed the two parameters instead of multiplying them.
* Sometimes the option of setting the volume and/or duration to zero as a way to act nonaggressively is provided.  Including settings of zero as an option also raises further questions, for example, how to handle trials in which participants set only one of the two intensity parameters to zero. [**Note: we do have zero as option**]
* With regard to the analysis, there is no definitive answer to the question of how to calculate aggression scores, or whether different scores might measure different types of aggression, as long as none of them have been properly validated. As it seems that volume and duration do not measure the exact same construct, it is advisable to consider them as separate measures for related subdimensions of aggression.

#### First sound blast

Why use the first sound blast only instead of the average of all trials? Should we?

According to some, the Taylor Aggression Paradigm is not a measure of aggression per se, but of reactive aggression, because participants react to the other "participant's" aggression. They suggest that for a pure measure of aggression, one should use only the first sound blast set by the participant, before they receive one themselves. At this stage, we attempt the analyses with all these different measures of aggression for exploratory purposes. See earlier reference to Elson et al. (2014):

  * If researchers are interested in measuring unprovoked aggression, they should also look at the settings in the first trial. Those studying provoked aggression or retaliation, on the other hand, should focus on all trials except the first one.
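
That split can be sketched as follows, using simulated long-format trial data (the column names are illustrative, not those of the present dataset):

```r
set.seed(42)

# Hypothetical long-format data: one row per participant per trial
trials <- data.frame(
  id = rep(1:4, each = 25),
  trial = rep(1:25, times = 4),
  volume = sample(0:10, 100, replace = TRUE)
)

# Unprovoked aggression: the first-trial setting only
unprovoked <- subset(trials, trial == 1, select = c(id, volume))

# Provoked aggression/retaliation: the average of the remaining trials
provoked <- aggregate(volume ~ id, data = subset(trials, trial > 1), FUN = mean)
```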

#### Problems in measurement

We note that Lobbestael et al. (2021) suggest, based on factor analysis, separately averaging all preprovocation versus all postprovocation trials. However, this recommendation applies to the 30-trial version (we have the 25-trial version). They add:

- In contrast to the 30-trial version, the 25-trial CRTT version showed to be underlined by one single factor, clustering the duration and volume of both provoked and unprovoked trials, irrespective of whether these were preceded by win or lose experiences.

Lobbestael, J., Emmerling, F., Brugman, S., Broers, N., Sack, A. T., Schuhmann, T., ... & Arntz, A. (2021). Toward a more valid assessment of behavioral aggression: An open source platform and an empirically derived scoring method for using the Competitive Reaction Time Task (CRTT). *Assessment*, *28*(4), 1065-1079. https://doi.org/10.1177/1073191120959757

Therefore, it is assumed safe to use and combine all 25 trials. However, we also note (again) the very high heterogeneity in quantification strategies:

- **Why the Flexibility is Problematic**: There are currently 130 publications in which results are based on the CRTT, and they reported 157 different quantification strategies in total! Given the number of different versions of the CRTT measure that can be extracted from its use in a study, it is very easy for a researcher to analyze several (or several dozen) versions of the CRTT outcome measures in a study, running hypothesis tests with one version of the measure after another until a version is found that produces the desired pattern of results. Given that the measure has been used in several dozen different ways in the published research, and often in multiple ways by the same authors (and sometimes even in multiple ways for different analyses within the same paper), it appears likely that selective reporting of results after exploring analyses with multiple versions of the CRTT is not uncommon. Even if multiple quantifications used are reported in a paper, it is often not clear how to interpret the findings given that in many cases one quantification strategy does not have greater validity than others.

See: https://www.flexiblemeasures.com/crtt/

Thus the recommendations are:

1. Preregistration of Analysis Plan
2. Preregistration of Decision Tree (e.g., regarding data transformation)
3. Defense of Selected Quantification Strategy

See: https://www.flexiblemeasures.com/crtt/index.php?menu=recommendations

#### Average of all 50 trials

After discussion with Dr. David Chester, it was also suggested to use the average of all 50 standardized trials, as in Chester and Lasko (2019) and Lasko and Chester (2022).

Chester & Lasko (2019). Validating a standardized approach to the Taylor Aggression Paradigm. *Social Psychological and Personality Science*, *10*(5), 620-631. https://doi.org/10.1177/1948550618775408

Lasko & Chester (2022). Measurement invariance and item response theory analysis of the taylor aggression paradigm. *Assessment*, *29*(5), 981-992. https://doi.org/10.1177/1073191121996450

Still, he also notes,

> In one of our papers, we show that the major approaches to scoring the CRTT typically arrive at the same result, so in the end, the scoring strategy you choose is unlikely to have a large effect on your findings.

That said, technically speaking, if a participant sets the intensity to 10, but duration to 0, there is no actual sound blast for that trial. Similarly, if a participant sets the duration to 10, but intensity to 0, there is no actual sound blast for that trial. Taking the product of intensity and duration takes this dynamic into account. In contrast, other methods using the sum of intensity and duration, or yet again the average of all 50 trials (intensity and duration) does not. That said, perhaps it is not a big deal given that this particular scenario (including 0 for one of the two settings) is probably rare.

Edit: Following another discussion, Dr. Chester pointed out that it is possible to measure how often this scenario of mismatching zero intensity occurs. Let's test this right here.

```{r}
# Percentage of participants with zero intensity but nonzero duration
data %>% 
  filter(blastintensity == 0) %>% 
  summarize(percent = round(sum(blastduration != 0) / nrow(data) * 100, 2))

# Percentage of participants with zero duration but nonzero intensity
data %>% 
  filter(blastduration == 0) %>% 
  summarize(percent = round(sum(blastintensity != 0) / nrow(data) * 100, 2))
```

So we have 0.35% and 0% of the data in this scenario, respectively.

Dr. Chester also recommends a better (but more complex) approach:

> If it's possible, a superior approach to this aggregate scoring strategy is to employ multilevel modeling instead of a univariate analysis. When you aggregate across all 50 of the individual CRTT measures, you are losing a lot of information/variability/statistical-power. Multilevel modeling on the non-aggregated data allows you to retain this variability.

Although it is not clear so far if this technique can be applied to our particular situation. Further study will be required.
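
Should it prove applicable, a minimal sketch of such a multilevel specification with `lme4` (a package not otherwise used in this report) might look as follows, assuming hypothetical long-format data with one row per trial; the simulated values only stand in for the real dataset:

```r
library(lme4)

# Hypothetical long-format data: condition and trait self-control vary
# between participants only; the blast settings vary across trials
# within participants.
set.seed(42)
n <- 40
trials.long <- data.frame(
  id = factor(rep(1:n, each = 25)),
  condition_dum = rep(rbinom(n, 1, 0.5), each = 25),
  BSCS.t.w.s = rep(rnorm(n), each = 25),
  blast = rnorm(n * 25)
)

# A random intercept per participant retains the trial-level variability
# that is lost when averaging across all trials
mod.ml <- lmer(blast ~ condition_dum * BSCS.t.w.s + (1 | id),
               data = trials.long)
```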

#### Hugues Leduc Method

Hugues Leduc suggested instead a two-step approach: first, calculate the product of volume and duration for each trial; second, average this new volume*duration composite across the 25 trials. This should result in a score different from simply multiplying the average of all duration trials with the average of all volume trials. We now add this method to the method comparison.

Talking with Stéphane, we also agreed to test the following two related methods:

The average of 25 composite trial scores, themselves made from the volume multiplied by the:

- log-transformed duration (Lindsay and Anderson (2000) method)

- square root duration (Carnagey and Anderson (2005) method)
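
A minimal sketch contrasting these scoring strategies on simulated long-format trial data (the column names are illustrative, not those of the present dataset; the +1 offset in the log variant is our own assumption, added because zero durations are possible in our version):

```r
set.seed(42)

# Hypothetical long-format data: one row per participant per trial
trials <- data.frame(
  id = rep(1:4, each = 25),
  volume = sample(0:10, 100, replace = TRUE),
  duration = sample(0:10, 100, replace = TRUE)
)

# (a) Product of the two trial averages (Bartholow et al., 2005)
means <- aggregate(cbind(volume, duration) ~ id, data = trials, FUN = mean)
means$product.of.means <- means$volume * means$duration

# (b) Hugues Leduc method: per-trial product, then averaged across trials
trials$composite <- trials$volume * trials$duration
means$mean.of.products <- aggregate(composite ~ id, trials, mean)$composite

# (c) Lindsay and Anderson (2000): volume x log(duration), per trial
trials$log.comp <- trials$volume * log(trials$duration + 1)

# (d) Carnagey and Anderson (2005): volume x sqrt(duration), per trial
trials$sqrt.comp <- trials$volume * sqrt(trials$duration)
```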

# Data cleaning

```{r child=if (fast == FALSE) '0_cleaning.Rmd', eval = TRUE}
```

# t-tests

```{r child=if (fast == FALSE) '1_t_tests.Rmd', eval = TRUE}
```

# Moderations

```{r child=if (fast == FALSE) '2_moderations.Rmd', eval = TRUE}
```

# Conclusions

Based on the results, it seems that the interaction of interest comes up for all six measures of blast aggression (intensity, duration, the combination of the two, and the first blast of each type), suggesting it is reliable.

The effect sizes are slightly lower for measures of first blast (*sr2* = 0.02) than the average intensity or duration (*sr2* = 0.035), or intensity * duration (*sr2* = 0.04).

Therefore, based on the marginally larger effect size, perhaps it does make sense to use the intensity * duration combination in future studies. My intuition is also that the effect is more reliable for reactive aggression (all trials) than proactive aggression (first measure only). Another reason to avoid using only the first trials is that they lead to problems with the distributions (i.e., they are not normal and are difficult to transform to normality).

There is another reason why using the product of intensity and duration might make more theoretical sense. Technically speaking, if a participant sets the intensity to 10, but duration to 0, there is no actual sound blast for that trial. Similarly, if a participant sets the duration to 10, but intensity to 0, there is no actual sound blast for that trial. Taking the product of intensity and duration takes this dynamic into account. In contrast, other methods using the sum of intensity and duration, or yet again the average of all 50 trials (intensity and duration) does not. Taking the average of that trial with intensity = 10 and duration = 0 would yield 5, whereas it should be 0 because no sound was actually administered for that trial.
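
A minimal numeric illustration of this point:

```r
# One trial where intensity is maxed out but duration is zero:
# no sound is actually delivered on that trial
intensity <- 10
duration <- 0

mean(c(intensity, duration))  # 5: wrongly suggests moderate aggression
intensity * duration          # 0: correctly reflects that no blast occurred
```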

## Preregistration {.tabset}

### Recommendation

For the preregistration, I would suggest committing to the product of the averages of intensity and duration across all trials, optimally transformed to normality via the `bestNormalize` package (as suggested above), and using the same design.

This strategy is similar to those who use the product of intensity with the log of duration, so the following papers could be cited (next tab).
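
A sketch of this scoring-plus-transformation step on simulated values (the skewed scores below merely stand in for the product of the trial-averaged intensity and duration settings; `bestNormalize` compares candidate transformations, partly via cross-validation, and returns the chosen one):

```r
library(bestNormalize)

# Simulated skewed composite scores (hypothetical values)
set.seed(42)
composite <- rexp(200) * 10

bn <- bestNormalize(composite)  # selects the best normalizing transformation
composite.t <- predict(bn)      # scores under the chosen transformation
bn$chosen_transform
```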

### Citable papers

**Volume x Duration, average of all trials (25)**

Arriaga, P., Monteiro, M. B., & Esteves, F. (2011). Effects of playing violent computer games on emotional desensitization and aggressive behavior. *Journal of Applied Social Psychology*, *41*(8), 1900-1925. https://doi.org/10.1111/j.1559-1816.2011.00791.x

> Participants' ratings of the two indexes of noise (i.e., intensity, duration) that were administered to the opponent were multiplied to compute a global measure of aggressive behavior (see Bartholow et al., 2005).

**Volume x Duration, multiplied averages of all trials (25)**

Bartholow, B. D., Sestir, M. A., & Davis, E. B. (2005). Correlates and consequences of exposure to video game violence: Hostile personality, empathy, and aggressive behavior. *Personality and Social Psychology Bulletin*, *31*(11), 1573-1586. https://doi.org/10.1177/0146167205277205

> In this study, average noise intensity and duration settings were multiplied to form a composite aggressive behavior score.

**Volume x log(Duration), average of all trials (25) / linear/quadratic contrasts across all trials (25)**

Lindsay, J. J. & Anderson, C. A. (2000). From antecedent conditions to violent actions: A general affective aggression model. *Personality and Social Psychology Bulletin*, *26*(5), 533-547.  https://doi.org/10.1177/0146167200267002

> The duration settings (hold times) were found to be positively skewed. We reduced this skew by subjecting the duration settings to a log transformation. For each participant, we multiplied the transformed duration setting by the intensity setting for each trial separately.

**Volume x √Duration, average of all trials (25)**

Carnagey, N. L. & Anderson, C. A. (2005). The effects of reward and punishment in violent video games on aggressive affect, cognition, and behavior. *Psychological Science*, *16*(11), 882-889. https://doi.org/10.1111/j.1467-9280.2005.01632.x

> An aggressive-energy score was calculated for each trial by taking the square root of the duration of noise chosen for the opponent and multiplying this value by the intensity of the noise chosen.

### Last minute exploration

One question that arises is whether we should include any variables other than the BSCS, since this is the only variable of interest yielding an interesting effect. When defining the sample size for the preregistration, we obtained an sr2 of .04 for the primary model (blast intensity * duration), which is rather large for these effects. However, that is for the model that controls for many other variables and interactions. When using only the BSCS in the model, the effect is smaller, .03, thus changing the required sample size for the power analysis.

However, even if we were to include those variables, perhaps we need not include them as interaction terms, but simply as covariates. Let's test this below by redefining model 3 again with all other variables as covariates.

Edit: after some exploration, the most parsimonious model seems to be the one controlling not only for the KIMS, but also for its interaction with condition.

```{r}
big.mod3 <- lm(blastintensity.duration.t.w.s ~
                 condition_dum*BSCS.t.w.s, 
               data = data, na.action = "na.exclude")
big.mod3 %>% 
    nice_lm(b.label = "B") %>% 
    nice_table(highlight = TRUE)

big.mod3 <- lm(blastintensity.duration.t.w.s ~
                 condition_dum*KIMS.t.w.s +  
                 condition_dum*BSCS.t.w.s, 
               data = data, na.action = "na.exclude")
big.mod3 %>% 
    nice_lm(b.label = "B") %>% 
    nice_table(highlight = TRUE)
```

Then we still get the sr2 = .04, without having to worry about administering the SOPT, IAT, BAQ, etc.

So the first study reports the large models with all variables, and the second study only the BSCS, along with the KIMS.

(Checking the three-way interaction again, just in case:)

```{r}
big.mod <- lm(blastintensity.duration.t.w.s ~
                condition_dum*KIMS.t.w.s +  
                condition_dum*BSCS.t.w.s*BAQ.t.w.s, 
               data = data, na.action = "na.exclude")
big.mod %>% 
    nice_lm(b.label = "B") %>% 
    nice_table(highlight = TRUE)
```


# Package References

```{r warning=FALSE, message=FALSE, results='asis'}
report::cite_packages(sessionInfo())
```
